
    Avalanche photodiodes and vacuum phototriodes for the electromagnetic calorimeter of the CMS experiment at the large hadron collider

    The homogeneous lead tungstate electromagnetic calorimeter for the Compact Muon Solenoid detector at the Large Hadron Collider operates in a challenging radiation environment. The central region of the calorimeter uses large-area avalanche photodiodes to detect the fast blue-violet scintillation light from the crystals. The high hadron fluence in the forward region precludes the use of these photodiodes, and vacuum phototriodes are used instead. The constructional complexity of the calorimeter, which comprises 75848 individual crystals, together with the activation of material, makes repair during the lifetime of the detector virtually impossible. We describe here the key features and performance of the photodetectors and the quality assurance procedures used to ensure that the proportion of photodetectors failing over the lifetime of CMS will be limited to a fraction of a percent.

    The reconstruction of digital holograms on a computational grid

    Digital holography is greatly extending the range of holography's applications and moving it from the lab into the field: a single CCD or other solid-state sensor can capture any number of holograms, while numerical reconstruction within a computer eliminates the need for chemical development and readily allows further processing and visualisation of the holographic image. The steady increase in sensor pixel count leads to the possibility of larger sample volumes, while smaller-area pixels enable the practical use of digital off-axis holography. However, this increase in pixel count also drives a corresponding expansion of the computational effort needed to numerically reconstruct such holograms, to the extent that the reconstruction process for a single depth slice takes significantly longer than the capture process for each single hologram. Grid computing - a recent innovation in large-scale distributed processing - provides a convenient means of harnessing significant computing resources in an ad-hoc fashion that might match the field deployment of a holographic instrument. We describe here the reconstruction of digital holograms on a trans-national computational Grid with over 10 000 nodes available at over 100 sites. A simplistic scheme of deployment was found to provide no computational advantage over a single powerful workstation. Based on these experiences we suggest an improved strategy for workflow and job execution for the replay of digital holograms on a Grid.
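
    As a rough illustration of the per-slice cost discussed above, the sketch below reconstructs one depth slice of an in-line hologram with the angular spectrum method, one forward and one inverse FFT per slice. The sensor size, pixel pitch, and laser wavelength are illustrative assumptions, not parameters taken from the paper.

        import numpy as np

        def reconstruct_slice(hologram, wavelength, pixel_pitch, z):
            """Reconstruct one depth slice of an in-line hologram at distance z
            using the angular spectrum method (one FFT pair per slice)."""
            n = hologram.shape[0]                  # assume a square n x n sensor
            fx = np.fft.fftfreq(n, d=pixel_pitch)  # spatial frequencies (1/m)
            FX, FY = np.meshgrid(fx, fx)
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            kernel = np.where(arg > 0,             # drop evanescent components
                              np.exp(2j * np.pi * z / wavelength
                                     * np.sqrt(np.maximum(arg, 0.0))), 0)
            return np.fft.ifft2(np.fft.fft2(hologram) * kernel)

        # Example: a 2048 x 2048 hologram, 4.65 um pixels, 532 nm laser, 50 mm depth.
        holo = np.random.rand(2048, 2048)          # placeholder for captured data
        slice_50mm = np.abs(reconstruct_slice(holo, 532e-9, 4.65e-6, 50e-3))

    A volume scan repeats this for every depth of interest, which is why reconstruction time quickly comes to dominate capture time as pixel counts grow.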

    Grid computing for the numerical reconstruction of digital holograms

    Digital holography has the potential to greatly extend holography's applications and move it from the lab into the field: a single CCD or other solid-state sensor can capture any number of holograms, while numerical reconstruction within a computer eliminates the need for chemical processing and readily allows further processing and visualisation of the holographic image. The steady increase in sensor pixel count and resolution leads to the possibilities of larger sample volumes and of higher spatial resolution sampling, enabling the practical use of digital off-axis holography. However, this increase in pixel count also drives a corresponding expansion of the computational effort needed to numerically reconstruct such holograms, to the extent that the reconstruction process for a single depth slice takes significantly longer than the capture process for each single hologram. Grid computing - a recent innovation in large-scale distributed processing - provides a convenient means of harnessing significant computing resources in an ad-hoc fashion that might match the field deployment of a holographic instrument. In this paper we consider the computational needs of digital holography and discuss the deployment of numerical reconstruction software over an existing Grid testbed. The analysis of marine organisms is used as an exemplar for workflow and job execution of in-line digital holography.
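
    Because every depth slice is computed from the same hologram with only the propagation distance changing, a volume reconstruction decomposes into independent jobs, and the workflow question is how to batch them. The sketch below uses a local process pool as a stand-in for Grid job submission and groups several slices per job, so the cost of staging the hologram to a worker is amortized; the batch size and the angular-spectrum routine are illustrative assumptions, not the deployment described in the paper.

        from multiprocessing import Pool
        import numpy as np

        def reconstruct_slice(holo, wl, pitch, z):
            # Angular-spectrum propagation, as in the single-slice sketch above.
            fx = np.fft.fftfreq(holo.shape[0], d=pitch)
            FX, FY = np.meshgrid(fx, fx)
            arg = np.maximum(1.0 - (wl * FX) ** 2 - (wl * FY) ** 2, 0.0)
            return np.fft.ifft2(np.fft.fft2(holo)
                                * np.exp(2j * np.pi * z / wl * np.sqrt(arg)))

        def reconstruct_batch(args):
            # One job reconstructs a batch of depths from one staged hologram.
            holo, wl, pitch, zs = args
            return [(z, np.abs(reconstruct_slice(holo, wl, pitch, z))) for z in zs]

        if __name__ == "__main__":
            holo = np.random.rand(1024, 1024)        # placeholder captured hologram
            depths = np.arange(10e-3, 100e-3, 1e-3)  # 90 slices spanning 10-100 mm
            batches = [depths[i:i + 10] for i in range(0, len(depths), 10)]
            jobs = [(holo, 532e-9, 4.65e-6, zs) for zs in batches]
            with Pool() as pool:                     # stand-in for Grid submission
                volume = {z: s for batch in pool.map(reconstruct_batch, jobs)
                          for z, s in batch}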

    Tree Contraction, Connected Components, Minimum Spanning Trees: a GPU Path to Vertex Fitting

    Standard parallel computing operations are considered in the context of algorithms for solving 3D graph problems which have applications, e.g., in vertex finding in HEP. Exploiting GPUs for tree accumulation and graph algorithms is challenging: GPUs offer extreme computational power and high memory-access bandwidth, but their model of fine-grained parallelism may not suit the irregular, linked representations of graph data structures. Achieving data-race-free computation may demand serialization through atomic transactions, inevitably producing poor parallel performance. A Minimum Spanning Tree algorithm for GPUs is presented, its implementation discussed, and its efficiency evaluated on GPU and multicore architectures.
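
    For orientation, GPU MST algorithms are commonly structured around Borůvka's scheme, in which every component selects its cheapest outgoing edge and the components are then contracted; both steps are data-parallel over edges and vertices. The sequential Python sketch below shows that round structure, with a union-find standing in for GPU tree contraction; it illustrates the general approach rather than the paper's implementation.

        def boruvka_mst(n, edges):
            """Borůvka's MST on n vertices: each round, every component picks
            its cheapest outgoing edge, then components merge (contraction).
            Both per-round steps are data-parallel, which is why this scheme
            maps well to GPUs; the merges below are where a GPU would need
            atomic operations to stay data-race free."""
            parent = list(range(n))

            def find(v):                       # union-find with path compression
                while parent[v] != v:
                    parent[v] = parent[parent[v]]
                    v = parent[v]
                return v

            mst, num_components = [], n
            while num_components > 1:
                cheapest = {}                  # component -> lightest outgoing edge
                for u, v, w in edges:
                    ru, rv = find(u), find(v)
                    if ru != rv:
                        for r in (ru, rv):
                            if r not in cheapest or w < cheapest[r][2]:
                                cheapest[r] = (u, v, w)
                if not cheapest:
                    break                      # remaining graph is disconnected
                for u, v, w in cheapest.values():
                    ru, rv = find(u), find(v)
                    if ru != rv:
                        parent[ru] = rv
                        mst.append((u, v, w))
                        num_components -= 1
            return mst

        # Example: four vertices with weighted edges (u, v, w).
        edges = [(0, 1, 4.0), (1, 2, 1.5), (0, 2, 3.0), (2, 3, 2.0)]
        print(boruvka_mst(4, edges))           # three edges, total weight 6.5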

    Comparison of two-dimensional binned data distributions using the energy test

    For the purposes of monitoring HEP experiments, comparison is often made between regularly acquired histograms of data and reference histograms which represent the ideal state of the equipment. With the larger experiments now starting up, there is a need to automate this task, since the volume of comparisons would overwhelm human operators. However, the two-dimensional histogram comparison tools currently available in ROOT have noticeable shortcomings. We present a new comparison test for 2D histograms, based on the Energy Test of Aslan and Zech, which provides more decisive discrimination between histograms of data coming from different distributions.
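
    For illustration: the energy test treats the bins of the two normalized histograms as point charges of strength u_i and -v_i at the bin centres and sums a logarithmic distance kernel R(r) = -ln r over all pairs, so the statistic is small when both histograms come from the same parent distribution. The sketch below is a minimal binned formulation with an assumed regularization cutoff for coincident bin centres; it is not the code of the paper, and in practice the null distribution of the statistic is estimated by permutation resampling.

        import numpy as np

        def energy_test(h1, h2, eps=1e-3):
            # Energy-test statistic between two equally binned 2D histograms,
            # after Aslan and Zech: weights w = u - v interact through the
            # kernel R(r) = -ln(r); eps regularizes zero distances.
            ny, nx = h1.shape
            ys, xs = np.meshgrid(np.arange(ny) / ny, np.arange(nx) / nx,
                                 indexing="ij")  # bin centres on a unit grid
            pts = np.column_stack([xs.ravel(), ys.ravel()])
            w = h1.ravel() / h1.sum() - h2.ravel() / h2.sum()
            d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
            R = -np.log(np.maximum(d, eps))
            return 0.5 * w @ R @ w

        # Example: two histograms from the same parent give a small statistic.
        rng = np.random.default_rng(0)
        lims = [[-4, 4], [-4, 4]]                # shared binning for both
        a, _, _ = np.histogram2d(rng.normal(size=5000), rng.normal(size=5000),
                                 bins=24, range=lims)
        b, _, _ = np.histogram2d(rng.normal(size=5000), rng.normal(size=5000),
                                 bins=24, range=lims)
        print(energy_test(a, b))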